9 research outputs found

    Analyzing the Impact of Cognitive Load in Evaluating Gaze-based Typing

    Full text link
    Gaze-based virtual keyboards provide an effective interface for text entry by eye movements. The efficiency and usability of these keyboards have traditionally been evaluated with conventional text entry performance measures such as words per minute, keystrokes per character, and backspace usage. However, in contrast to traditional text entry approaches, gaze-based typing involves natural eye movements that are highly correlated with human brain cognition. Employing eye gaze as an input can lead to excessive mental demand, and in this work we argue the need to include cognitive load as an eye typing evaluation measure. We evaluate three variations of gaze-based virtual keyboards that differ in the positioning of word suggestions. The conventional text entry metrics indicate no significant difference in the performance of the different keyboard designs. However, STFT (Short-time Fourier Transform) based analysis of EEG signals indicates variations in the mental workload of participants while interacting with these designs. Moreover, the EEG analysis provides insights into variations in the user's cognition across different typing phases and intervals, which should be considered in order to improve eye typing usability. Comment: 6 pages, 4 figures, IEEE CBMS 201
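
    The abstract states only that EEG was analyzed with a short-time Fourier transform; as a rough, illustrative sketch of how such a workload proxy might be computed from a single EEG channel (the sampling rate, window length, and the theta/alpha band-ratio index below are assumptions, not the authors' method), consider:

    import numpy as np
    from scipy.signal import stft

    def band_power(f, Zxx, lo, hi):
        """Mean spectral power within [lo, hi] Hz across all STFT frames."""
        mask = (f >= lo) & (f <= hi)
        return np.mean(np.abs(Zxx[mask, :]) ** 2)

    def workload_index(eeg_channel, fs=256.0):
        """Hypothetical mental-workload proxy: theta-band power over alpha-band power."""
        f, _, Zxx = stft(eeg_channel, fs=fs, nperseg=int(2 * fs))  # 2-second windows
        return band_power(f, Zxx, 4.0, 7.0) / band_power(f, Zxx, 8.0, 12.0)

    # Example with a simulated 60-second channel sampled at 256 Hz
    rng = np.random.default_rng(0)
    print(f"workload index: {workload_index(rng.standard_normal(60 * 256)):.3f}")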

    eyeGUI: a novel framework for eye-controlled user interfaces

    No full text
    In generic applications, user interfaces and input events are typically built around mouse and keyboard interactions. Eye-controlled applications need to map these interactions to eye gestures, and hence the design and optimization of interface elements becomes a substantial consideration. In this work, we propose the novel eyeGUI framework to support the development of such interactive eye-controlled applications, covering significant aspects such as rendering, layout, dynamic modification of content, and support for graphics and animation.

    GazeTheKey: interactive keys to integrate word predictions for gaze-based text entry

    No full text
    In conventional keyboard interfaces for eye typing, the functionalities of the virtual keys are static, i.e., the user's gaze at a particular key simply registers the associated letter as input. In this work we argue that the keys should be more dynamic and embed intelligent predictions to support gaze-based text entry. In this regard, we demonstrate a novel "GazeTheKey" interface where a key not only signifies the input character, but also predicts relevant words that can be selected with the user's gaze using a two-step dwell time.
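
    As a minimal sketch of what a two-step dwell selection loop could look like (the dwell durations, the gaze_target callback, and the overall event handling are illustrative assumptions rather than the interface's actual implementation):

    import time

    FIRST_DWELL_S = 0.4   # first dwell: reveal word suggestions on the key
    SECOND_DWELL_S = 0.4  # second dwell: confirm the item under the gaze

    def select_with_two_step_dwell(gaze_target, now=time.monotonic):
        """gaze_target() returns the key or suggestion currently under the gaze (or None).
        Returns the committed item once the second dwell completes."""
        current, since, revealed = None, None, False
        while True:
            target, t = gaze_target(), now()
            if target != current:                      # gaze moved: restart the dwell timer
                current, since, revealed = target, t, False
            elif current is not None:
                dwell = t - since
                if not revealed and dwell >= FIRST_DWELL_S:
                    revealed, since = True, t          # step 1: suggestions become visible
                elif revealed and dwell >= SECOND_DWELL_S:
                    return current                     # step 2: commit the gazed item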

    Leveraging error correction in voice-based text entry by Talk-and-Gaze

    No full text
    We present the design and evaluation of Talk-and-Gaze (TaG), a method for selecting and correcting errors with voice and gaze. TaG uses eye gaze to overcome the inability of voice-only systems to provide spatial information. The user's point of gaze is used to select an erroneous word either by dwelling on the word for 800 ms (D-TaG) or by uttering a "select" voice command (V-TaG). A user study with 12 participants compared D-TaG, V-TaG, and a voice-only method for selecting and correcting words. Corrections were performed more than 20% faster with D-TaG than with the V-TaG or voice-only methods. In addition, D-TaG required 24% less selection effort than V-TaG and 11% less selection effort than voice-only error correction. D-TaG was well received in a subjective assessment, with 66% of users choosing it as their preferred method for error correction in voice-based text entry.
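
    A small sketch of how the gaze-based word selection described above might be wired up; the 800 ms dwell threshold and the "select" utterance come from the abstract, while the bounding-box representation and function names are assumptions for illustration:

    from dataclasses import dataclass

    DWELL_THRESHOLD_S = 0.8  # D-TaG: dwell on a word for 800 ms to select it

    @dataclass
    class WordBox:
        text: str
        x0: float
        y0: float
        x1: float
        y1: float

        def contains(self, x, y):
            return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

    def word_under_gaze(words, gaze_x, gaze_y):
        """Return the displayed word whose bounding box contains the gaze point, if any."""
        return next((w for w in words if w.contains(gaze_x, gaze_y)), None)

    def should_select(dwell_seconds, voice_command=None):
        """Select on a long enough dwell (D-TaG) or on an explicit 'select' utterance (V-TaG)."""
        return dwell_seconds >= DWELL_THRESHOLD_S or voice_command == "select"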

    Testing a Commercial BCI Device for In-Vehicle Interfaces Evaluation: A Simulator and Real-World Driving Study

    Get PDF
    This study assesses the sensitivity of an affordable BCI device in the context of driver distraction, in both a low-fidelity simulator and a real-world driving environment. Twenty-three participants performed a car-following task while using a smartphone application involving a range of generic smartphone widgets. On the one hand, the results demonstrated that secondary task completion time is a fairly robust metric, as it is sensitive to user-interface style while being consistent between the two driving environments. On the other hand, while the BCI attention-level metric was not sensitive to the different user interfaces, we found it to be significantly higher in the real-driving environment than in the simulated one, which reproduces findings obtained with medical-grade sensors.

    Hands-free web browsing: enriching the user experience with gaze and voice modality

    No full text
    Hands-free browsers provide an effective tool for Web interaction and accessibility, overcoming the need for conventional input sources. Current approaches to hands-free interaction fall primarily into either voice- or gaze-based modalities. In this work, we investigate how these two modalities can be integrated to provide a better hands-free experience for end users. We demonstrate a multimodal browsing approach that combines eye gaze and voice inputs for optimized interaction, while accommodating user preferences and retaining the benefits of each single modality. An initial assessment with five participants indicates improved performance for the multimodal prototype in comparison to the single modalities for hands-free Web browsing.
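
    A hedged sketch of the kind of division of labour such a multimodal browser could use, with voice carrying the action and gaze supplying the spatial target; the command names and handlers below are assumptions, not the prototype's actual API:

    def dispatch(voice_command, gaze_x, gaze_y):
        """Route a recognized voice command to an action at the current gaze position."""
        actions = {
            "click":  lambda: print(f"click the element under gaze at ({gaze_x}, {gaze_y})"),
            "scroll": lambda: print("scroll the page"),
            "back":   lambda: print("navigate back"),
        }
        handler = actions.get(voice_command)
        if handler:
            handler()

    dispatch("click", 512, 300)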

    A Multimodal dataset for authoring and editing multimedia content: The MAMEM project

    No full text
    In this report we present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed within the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals along with demographic, clinical, and behavioral data collected from 36 individuals (18 able-bodied and 18 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation, and during imaginary movement tasks. Alongside these data we also include evaluation reports from both the subjects and the experimenters concerning the experimental procedure and the collected dataset. We believe that the presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.
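
    As a purely hypothetical illustration of how one session of such a multimodal dataset could be organized in memory (the actual file formats, channel counts, and field names are defined by the MAMEM report, not here):

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class ParticipantSession:
        participant_id: str
        motor_impaired: bool
        eeg: np.ndarray         # shape (channels, samples)
        gaze: np.ndarray        # shape (samples, 2), screen coordinates
        gsr: np.ndarray         # galvanic skin response, shape (samples,)
        heart_rate: np.ndarray  # shape (samples,)
        task: str               # e.g. "web_browsing", "media_editing", "imaginary_movement"

    # Example with placeholder arrays; real recordings would be loaded from the released files
    session = ParticipantSession(
        participant_id="P01", motor_impaired=False,
        eeg=np.zeros((14, 1000)), gaze=np.zeros((1000, 2)),
        gsr=np.zeros(1000), heart_rate=np.zeros(1000),
        task="web_browsing",
    )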